An Efficient Multiple Object Vision Tracking System using Bipartite Graph Matching
For application domains like the 11 vs. 11 robot soccer league, crowd surveillance and air traffic control, vision systems need to be able to identify and maintain information in real time about multiple objects as they move through an environment using video images. In this paper, we reduce the multi-object tracking problem to bipartite graph matching and present efficient techniques that compute the optimal matching in real time. We demonstrate the robustness of our system on a task of tracking indistinguishable objects. One of the advantages of our tracking system is that it requires a much lower frame rate than standard tracking systems to reliably keep track of multiple objects.
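The core reduction described above, assigning each new detection to an existing track so that total displacement is minimised, can be sketched as follows. This is an illustrative brute-force matcher for small object counts, not the paper's real-time algorithm; the function name and 2-D point representation are assumptions:

```python
from itertools import permutations
from math import hypot

def match_tracks(tracks, detections):
    """Bipartite matching of tracks to detections by minimum total
    displacement. Brute force over permutations: fine for a handful of
    objects, illustrative only for larger counts."""
    n = len(tracks)
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        cost = sum(hypot(tracks[i][0] - detections[j][0],
                         tracks[i][1] - detections[j][1])
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return list(best_perm)  # best_perm[i] = detection assigned to track i

# Two indistinguishable objects whose detections arrive in swapped order.
tracks     = [(0.0, 0.0), (10.0, 0.0)]
detections = [(9.0, 0.5), (1.0, -0.5)]
print(match_tracks(tracks, detections))  # → [1, 0]
```

The optimal matching keeps identities consistent even though the objects themselves carry no distinguishing features, which is exactly the property the abstract exercises.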
Induction of Topological Environment Maps from Sequences of Visited Places
In this paper we address the problem of topologically mapping environments which contain inherent perceptual aliasing caused by repeated environment structures. We propose an approach that does not use motion or odometric information but only a sequence of deterministic measurements observed by traversing an environment. Our algorithm implements a stochastic local search to build a small map which is consistent with local adjacency information extracted from a sequence of observations. Moreover, local adjacency information is incorporated to disambiguate places which are physically different but appear identical to the robot's senses. Experiments show that the proposed method is capable of mapping environments with a high degree of perceptual aliasing, and that it infers a small map quickly.
Measuring Visual Consistency in 3D Rendering Systems
One of the major challenges facing a present-day game development company is the removal of bugs from such complex virtual environments. This work presents an approach for measuring the correctness of synthetic scenes generated by the rendering system of a 3D application, such as a computer game. Our approach builds a database of labelled point clouds representing the spatiotemporal colour distribution for the objects present in a sequence of bug-free frames. This is done by converting the positions that the pixels take over time into the equivalent 3D points with associated colours. Once the space of labelled points is built, each new image produced from the same game by any rendering system can be analysed by measuring its visual inconsistency in terms of distance from the database. Objects within the scene can be relocated (manually or by the application engine); yet the algorithm is able to perform the image analysis in terms of the 3D structure and colour distribution of samples on the surface of the object. We applied our framework to the publicly available game RacingGame developed for Microsoft XNA. Preliminary results show how this approach can be used to detect a variety of visual artifacts generated by the rendering system in a professional-quality game engine.
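A toy version of the consistency measure, a database of coloured 3D samples plus a nearest-neighbour colour distance for newly rendered samples, might look like the sketch below. The linear scan and scalar greyscale colours are simplifications; the paper's actual database structure and distance metric are not reproduced here:

```python
def build_reference(frames):
    """Accumulate (x, y, z, colour) samples from bug-free frames into a
    flat list: a stand-in for the labelled point-cloud database."""
    db = []
    for frame in frames:
        db.extend(frame)
    return db

def inconsistency(db, observed):
    """Score a rendered sample set: for each observed point, take the
    colour distance to the spatially nearest reference point and average.
    Large scores flag candidate visual artifacts."""
    total = 0.0
    for (x, y, z, c) in observed:
        nearest = min(db, key=lambda p: (p[0] - x) ** 2
                                        + (p[1] - y) ** 2
                                        + (p[2] - z) ** 2)
        total += abs(nearest[3] - c)
    return total / len(observed)

ref = build_reference([[(0.0, 0.0, 0.0, 10.0), (1.0, 0.0, 0.0, 20.0)]])
print(inconsistency(ref, [(0.0, 0.0, 0.0, 10.0)]))   # consistent → 0.0
print(inconsistency(ref, [(0.0, 0.0, 0.0, 200.0)]))  # corrupted → 190.0
```

A spatial index (e.g. a k-d tree) would replace the linear scan in any practical variant; the point here is only the "distance from the database" idea.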
Improved Reinforcement Learning with Curriculum
Humans tend to learn complex abstract concepts faster if examples are presented in a structured manner. For instance, when learning how to play a board game, usually one of the first concepts learned is how the game ends, i.e. the actions that lead to a terminal state (win, lose or draw). The advantage of learning end-games first is that once the actions which lead to a terminal state are understood, it becomes possible to incrementally learn the consequences of actions that are further away from a terminal state; we call this an end-game-first curriculum. Currently the state-of-the-art machine learning player for general board games, AlphaZero by Google DeepMind, does not employ a structured training curriculum; instead it learns from the entire game at all times. By employing an end-game-first training curriculum to train an AlphaZero-inspired player, we empirically show that the rate of learning of an artificial player can be improved during the early stages of training when compared to a player not using a training curriculum.
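The end-game-first idea can be sketched as a sampling schedule that initially draws training positions only from near the end of recorded games and widens the window as training progresses. The widening schedule, data layout and function name below are hypothetical, not the paper's implementation:

```python
import random

def curriculum_position(games, step):
    """End-game-first curriculum: at training step `step`, sample a
    position at most `window` plies before a terminal state, where the
    window widens linearly over time (hypothetical schedule).
    `games` is a list of move sequences; returns a (game, ply) pair."""
    window = 5 * (step + 1)
    pool = [(g, ply)
            for g in games
            for ply in range(max(0, len(g) - window), len(g))]
    return random.choice(pool)

random.seed(0)
games = [list(range(20))]          # one recorded 20-ply game
# Early in training only the last 5 plies are ever sampled.
print(min(curriculum_position(games, 0)[1] for _ in range(50)))  # ≥ 15
```

As `step` grows the window covers the whole game, recovering AlphaZero-style uniform sampling as a limiting case.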
Uncertainty Analysis of a Landmark Initialization Method for Simultaneous Localization and Mapping
To operate successfully in any environment, mobile robots must be able to localize themselves accurately. In this paper, we describe a method to perform Simultaneous Localization and Mapping (SLAM) requiring only landmark bearing measurements taken along a linear trajectory. We solve the landmark initialization problem with only the assumption that the vision sensor of the robot can identify the landmarks and estimate their bearings. Contrary to existing approaches to landmark-based navigation, we do not require any other sensors (such as range sensors or wheel encoders) or prior knowledge of the relative distances between the landmarks. We provide an analysis of the uncertainty of the robot's observations. In particular, we show how the uncertainty of the measurements is affected by a change of frames; that is, we determine what an observer attached to a landmark frame can deduce from the information transmitted by an observer attached to the robot frame. This SLAM system is ideally suited for the navigation of domestic robots such as autonomous lawn-mowers and vacuum cleaners.
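The kind of bearing-only landmark initialization described here can be illustrated by intersecting two bearing rays taken at known points on a straight-line trajectory. The axis-aligned frame and the function below are illustrative assumptions, not the paper's formulation or its uncertainty analysis:

```python
from math import tan, atan2, isclose

def init_landmark(x1, x2, b1, b2):
    """Triangulate a landmark from two bearings b1, b2 (radians, measured
    from the direction of travel) observed at positions x1 and x2 on a
    straight trajectory along the x-axis (illustrative frame)."""
    # Each observation constrains the landmark to the ray
    #   y = (x - xi) * tan(bi);  intersect the two rays.
    t1, t2 = tan(b1), tan(b2)
    x = (x2 * t2 - x1 * t1) / (t2 - t1)
    y = (x - x1) * t1
    return x, y

# Landmark at (3, 2), observed from x = 0 and then x = 1.
print(init_landmark(0.0, 1.0, atan2(2, 3), atan2(2, 2)))  # → (3.0, 2.0)
```

Note the degenerate case `t1 == t2` (parallel rays, landmark on the trajectory line or no baseline) has no solution, which is one source of the measurement uncertainty the paper analyses.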
Correlating eye gaze direction, depth and vehicle information on an interactive map for driver training
The over-represented number of novice drivers involved in crashes is alarming. Driver training is one of the interventions aimed at mitigating the number of crashes that involve young drivers. Experienced drivers have better hazard perception ability compared to inexperienced drivers. Eye gaze patterns have been found to be an indicator of the driver's competency level. The aim of this paper is to develop an in-vehicle system which correlates information about the driver's gaze and vehicle dynamics, which is then used to assist driver trainers in assessing driving competency. This system allows visualization of the complete driving manoeuvre data on interactive maps. It uses an eye tracker and perspective projection algorithms to compute the depth of gaze and plots it on Google Maps. The interactive map also features the trajectory of the vehicle and turn indicator usage. This system allows efficient and user-friendly analysis of the driving task. It can be used by driver trainers and trainees to objectively understand the risks encountered during driving manoeuvres. This paper presents a prototype that plots the driver's eye gaze depth and direction on an interactive map along with the vehicle dynamics information. This prototype will be used in future work to study the difference in gaze patterns between novice and experienced drivers prior to a given manoeuvre.
Fast Lexically Constrained Viterbi Algorithm (FLCVA): Simultaneous Optimization of Speed and Memory
Lexical constraints on the input of speech and on-line handwriting systems improve the performance of such systems. A significant gain in speed can be achieved by integrating the different Hidden Markov Models (HMMs) corresponding to the words of the relevant lexicon into a digraph structure. This integration avoids redundant computations by sharing intermediate results between HMMs corresponding to different words of the lexicon. In this paper, we introduce a token passing method to simultaneously compute the a posteriori probabilities of all the words of the lexicon. The coding scheme that we introduce for the tokens is optimal in the information-theoretic sense: the tokens use the minimum possible number of bits. Overall, we simultaneously optimize the execution speed and the memory requirement of the recognition systems.
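The prefix-sharing idea behind the lexicon digraph can be illustrated with a toy token passing sweep over a prefix trie, a stand-in for the merged HMM structure. Real HMM state topologies, self-loops and log-probability arithmetic are deliberately omitted; the names below are assumptions:

```python
def build_trie(lexicon):
    """Merge the lexicon into a prefix trie so that words sharing a
    prefix share the computation for that prefix."""
    trie = {}
    for word in lexicon:
        node = trie
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = word  # terminal marker stores the completed word
    return trie

def best_word(trie, obs, emit):
    """Token passing: each token carries (score, trie node); one sweep
    over the observations scores every word in the lexicon at once."""
    tokens = [(1.0, trie)]
    for o in obs:
        nxt = []
        for score, node in tokens:
            for ch, child in node.items():
                if ch != "$":
                    nxt.append((score * emit(o, ch), child))
        tokens = nxt
    finished = [(s, n["$"]) for s, n in tokens if "$" in n]
    return max(finished)[1] if finished else None

trie = build_trie(["cat", "car", "dog"])
emit = lambda o, ch: 1.0 if o == ch else 0.1  # toy emission model
print(best_word(trie, "car", emit))  # → car
```

Because "cat" and "car" share the prefix "ca", its score is computed once and reused, which is the redundancy the digraph integration eliminates.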
A vision based target detection system for docking of an autonomous underwater vehicle
This paper describes the development and preliminary experimental evaluation of a vision-based docking system to allow an Autonomous Underwater Vehicle (AUV) to identify and attach itself to a set of uniquely identifiable targets. These targets, docking poles, are detected using Haar rectangular features and rotation of integral images. A non-holonomic controller allows the Starbug AUV to orient itself with respect to the target whilst maintaining visual contact during the manoeuvre. Experimental results show the proposed vision system is capable of robustly identifying a pair of docking poles simultaneously in a variety of orientations and lighting conditions. Experiments in an outdoor pool show that this vision system enables the AUV to dock autonomously from a distance of up to 4 m in relatively low visibility.
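Haar rectangular features of the kind used for the pole detector rely on the integral image (summed-area table), which turns any rectangle sum into four table lookups. A minimal sketch, without the rotated variant the paper uses:

```python
def integral_image(img):
    """Summed-area table with a zero border:
    ii[y][x] = sum of img over rows < y and cols < x."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of pixels in rows top..bottom-1, cols left..right-1,
    in O(1) via four lookups; a Haar feature is a signed combination
    of two or three such sums."""
    return (ii[bottom][right] - ii[top][right]
            - ii[bottom][left] + ii[top][left])

img = [[1, 2],
       [3, 4]]
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 2, 2))  # → 10
print(rect_sum(ii, 0, 0, 1, 1))  # → 1
```

The constant-time rectangle sums are what make scanning a Haar detector over every image position and scale tractable on an AUV's onboard hardware.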
How consistent are drivers in their driving? A driver training perspective
The value and effectiveness of driver training as a means of improving driver behaviour and road safety continues to fuel research and societal debate. Knowledge about the characteristics of safe driving that need to be learnt is extensive. Research has shown that young drivers are over-represented in crash statistics. The encouraging fact is that novice drivers have shown improvement in road scanning patterns after training. This paper presents a driver behaviour study conducted on a closed-circuit track. A group of experienced and novice drivers performed repeated multiple manoeuvres (i.e. turn, overtake and lane change) under identical conditions. Variables related to the driver, vehicle and environment were recorded in a research vehicle equipped with multiple in-vehicle sensors such as GPS, accelerometers, vision processing, an eye tracker and a laser scanner. Each group consistently exhibited a set of driving patterns characterising that group. Behaviours such as indicator usage before a lane change and following distance while performing a manoeuvre were among the consistently observed behaviours differentiating novice from experienced drivers. This paper highlights the results of our study and emphasizes the need for effective driver training programs focusing on young and novice drivers.